Search for: All records

Creators/Authors contains: "Hasan, A."

  1. Free, publicly-accessible full text available August 1, 2024
  2. Cloud detection is an essential pre-processing step in remote sensing image analysis workflows. Most traditional rule-based and machine-learning algorithms rely on low-level features of clouds and classify individual cloud pixels based on their spectral signatures. Cloud detection with such approaches can be challenging due to a multitude of factors, including harsh lighting conditions, the presence of thin clouds, the context of surrounding pixels, and complex spatial patterns. In recent studies, deep convolutional neural networks (CNNs) have shown outstanding results in the computer vision domain, as they better capture the texture, shape, and context of images. In this study, we propose a deep learning CNN approach to detect cloud pixels in medium-resolution satellite imagery. The proposed CNN accounts for both low-level features, such as color and texture information, and high-level features extracted from successive convolutions of the input image. We prepared a cloud-pixel dataset of approximately 7273 randomly sampled 320 × 320 pixel image patches taken from a total of 121 Landsat-8 (30 m) and Sentinel-2 (20 m) image scenes, each of which comes with a cloud mask. Of the available data channels, only the blue, green, red, and NIR bands are fed into the model. The CNN model was trained on 5300 image patches and validated on 1973 independent image patches. As the final output, the model produces a binary mask of cloud and non-cloud pixels. The results are benchmarked against established cloud detection methods using standard accuracy metrics.

     
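To make the patch-based setup above concrete, the sketch below builds a minimal encoder-decoder CNN in PyTorch that maps a 4-band (blue, green, red, NIR) 320 × 320 patch to a per-pixel cloud / non-cloud mask. This is not the authors' published architecture; the layer sizes and threshold are illustrative assumptions.

```python
# Minimal sketch (not the published architecture): a small fully convolutional
# network mapping a 4-band 320x320 patch to a per-pixel cloud probability.
import torch
import torch.nn as nn

class CloudMaskCNN(nn.Module):
    """Hypothetical encoder-decoder CNN for binary cloud masking."""
    def __init__(self, in_bands: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                 # 320 -> 160
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                 # 160 -> 80
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(inplace=True),  # 80 -> 160
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(inplace=True),  # 160 -> 320
            nn.Conv2d(16, 1, 1),                             # per-pixel cloud logit
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Example: one 320x320 patch with blue, green, red, and NIR channels.
patch = torch.randn(1, 4, 320, 320)
model = CloudMaskCNN()
cloud_mask = torch.sigmoid(model(patch)) > 0.5   # binary cloud / non-cloud mask
print(cloud_mask.shape)                          # torch.Size([1, 1, 320, 320])
```

Thresholding the sigmoid output at 0.5 yields the binary cloud mask described in the abstract; the threshold itself would normally be tuned on the validation patches.
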
  3. The accelerated warming of the high Arctic has intensified the extensive thawing of permafrost. Retrogressive thaw slumps (RTSs) are considered among the most active landforms in Arctic permafrost, and an increase in RTSs has been observed across the Arctic in recent decades. Continuous monitoring of RTSs is important for understanding climate-change-driven disturbances in the region, but manual detection of these landforms is extremely difficult because they occur over exceptionally large areas. Only a few studies have explored the utility of very high spatial resolution (VHSR) commercial satellite imagery for automated mapping of RTSs. We have developed a deep learning (DL) convolutional neural network (CNN)-based workflow to automatically detect RTSs from VHSR satellite imagery. This study systematically compared the performance of different DL CNN model architectures with varying backbones. Our candidate CNN models include DeepLabV3+, UNet, UNet++, Multi-scale Attention Net (MA-Net), and Pyramid Attention Network (PAN), each with ResNet50, ResNet101, and ResNet152 backbones. The RTS modeling experiment was conducted on Banks Island and Ellesmere Island in Canada. The UNet++ model demonstrated the highest accuracy (F1 score of 87%) with the ResNet50 backbone, at the expense of longer training and inference times. The PAN, DeepLabV3+, MA-Net, and UNet models reported lower F1 scores of 72%, 75%, 80%, and 81%, respectively. Our findings reveal the relative performance of different DL CNNs in imagery-enabled RTS mapping and provide useful insights into operationalizing the mapping application across the Arctic.

     
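The architecture/backbone grid compared above can be instantiated, for illustration, with the open-source segmentation_models_pytorch package. The listing does not say which tooling the authors actually used, so the snippet below is only a hedged sketch of how such candidate models might be built; the pretrained encoder weights and binary class setup are assumptions.

```python
# Illustrative sketch: instantiate every architecture/backbone pair named in the
# abstract using segmentation_models_pytorch (not necessarily the authors' code).
import segmentation_models_pytorch as smp

ARCHITECTURES = {
    "UNet": smp.Unet,
    "UNet++": smp.UnetPlusPlus,
    "DeepLabV3+": smp.DeepLabV3Plus,
    "MA-Net": smp.MAnet,
    "PAN": smp.PAN,
}
BACKBONES = ["resnet50", "resnet101", "resnet152"]

def build_candidates(in_channels: int = 3, classes: int = 1):
    """Yield (architecture, backbone, model) for every combination in the comparison."""
    for arch_name, arch_cls in ARCHITECTURES.items():
        for backbone in BACKBONES:
            model = arch_cls(
                encoder_name=backbone,
                encoder_weights="imagenet",   # pretrained encoder weights (assumed)
                in_channels=in_channels,      # VHSR input bands (assumed RGB here)
                classes=classes,              # binary RTS / background mask
            )
            yield arch_name, backbone, model
```

Each candidate would then be trained and evaluated on the same RTS patches so that the reported F1 scores are directly comparable across architectures and backbones.
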
  4. The microtopography associated with ice wedge polygons (IWPs) governs the Arctic ecosystem from local to regional scales through its impacts on the flow and storage of water and, therefore, on vegetation and carbon. Increasing subsurface temperatures in Arctic permafrost landscapes cause differential ground settlement, followed by a series of adverse microtopographic transitions at a sub-decadal scale. The entire Arctic has been imaged at 0.5 m or finer resolution by commercial satellite sensors, and the dramatic microtopographic transformation of low-centered into high-centered IWPs can be identified in such sub-meter resolution imagery. In this exploratory study, we employed a deep learning (DL)-based object detection and instance segmentation method, Mask R-CNN, to automatically map IWPs from commercial satellite imagery. Different tundra vegetation types have distinct spectral, spatial, and textural characteristics, which in turn determine the semantics of the overlying IWPs. Landscape complexity translates into image complexity, which affects DL model performance. Scarcity of labelled training images, inadequate training samples for some tundra types, and class imbalance are other key challenges in this study. We implemented image augmentation methods to introduce variety into the training data and trained separate models for each tundra type. The augmentation methods show promising results, but the models trained separately per tundra type appear to suffer from the lack of annotated data.

     
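As an illustration of the Mask R-CNN setup described above, the sketch below adapts torchvision's off-the-shelf Mask R-CNN to an ice-wedge-polygon class set. The use of torchvision, the backbone, and the class count are assumptions for illustration, not details taken from the study.

```python
# Illustrative sketch only: torchvision's Mask R-CNN with its detection and mask
# heads replaced for an IWP class set (class count is an assumption).
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_iwp_maskrcnn(num_classes: int = 2):
    """num_classes includes background, e.g. 2 for background + IWP,
    or 3 if low-centered and high-centered polygons are mapped as separate classes."""
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box-classification head with one sized for our classes.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask-prediction head likewise.
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
    return model
```

The image augmentation mentioned in the abstract (e.g., flips, rotations, and radiometric jitter applied to the training patches) would be applied in the data loader, upstream of a model like this one.
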
  5. The regional extent and spatiotemporal dynamics of Arctic permafrost disturbances remain poorly quantified. High spatial resolution commercial satellite imagery offers transformational opportunities to observe, map, and document the micro-topographic transitions occurring in Arctic polygonal tundra at multiple spatial and temporal frequencies. The entire Arctic has been imaged at 0.5 m or finer resolution by commercial satellite sensors, yet the imagery is still largely underutilized and value-added Arctic science products are rare. Knowledge discovery through artificial intelligence (AI), big imagery, and high-performance computing (HPC) resources is just starting to be realized in Arctic science. Large-scale deployment of petabyte-scale imagery resources requires sophisticated computational approaches to automated image interpretation, coupled with efficient use of HPC resources. In addition to semantic complexities, a multitude of factors inherent to sub-meter resolution satellite imagery, such as file size, dimensions, spectral channels, overlaps, spatial references, and imaging conditions, challenge the direct translation of AI-based approaches from computer vision applications. The memory limitations of graphics processing units necessitate partitioning an input satellite image into manageable sub-arrays, followed by parallel prediction and post-processing to reconstruct results matching the input image dimensions and spatial reference. We have developed a novel high-performance image analysis framework, the Mapping Application for Arctic Permafrost Land Environment (MAPLE), that enables the integration of operational-scale GeoAI capabilities into Arctic science applications. We have designed the MAPLE workflow to be interoperable across HPC architectures while making optimal use of computing resources.

     
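The partition-predict-reassemble step described above can be sketched as follows. The tile size, array layout, and the predict callable are illustrative assumptions, not MAPLE's actual interface.

```python
# Minimal sketch of the tiling step the abstract describes: a scene too large for
# GPU memory is split into fixed-size sub-arrays, predicted tile by tile, and the
# per-tile masks are stitched back to the original image dimensions.
import numpy as np

def predict_in_tiles(scene: np.ndarray, predict, tile: int = 1024) -> np.ndarray:
    """scene: (bands, H, W) array; predict: callable mapping a sub-array to an (h, w) mask."""
    _, height, width = scene.shape
    mask = np.zeros((height, width), dtype=np.uint8)
    for row in range(0, height, tile):
        for col in range(0, width, tile):
            sub = scene[:, row:row + tile, col:col + tile]   # edge tiles may be smaller
            mask[row:row + sub.shape[1], col:col + sub.shape[2]] = predict(sub)
    return mask
```

A production workflow such as the one described would additionally handle overlapping tiles and seam blending, carry the spatial reference through to the reconstructed mask, and dispatch tiles in parallel across HPC nodes.
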